
    The 3D model control of image processing

    Telerobotics studies the remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and for the human operator.
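
    A minimal sketch of the top-down idea, under assumed names and thresholds (project_feature, the camera matrix, and the 0.7 match threshold are illustrative, not taken from the paper): the 3-D model predicts where each feature should appear in the image, and the image processing only verifies and refines that prediction in a small window instead of searching the whole frame bottom-up.

        # Hedged sketch: model-driven (top-down) verification of predicted features.
        # The camera model, window size, and threshold are illustrative assumptions.
        import numpy as np

        def project_feature(model_point_3d, camera_matrix):
            """Project a 3-D model point into image coordinates (pinhole model)."""
            p = camera_matrix @ np.append(model_point_3d, 1.0)
            return p[:2] / p[2]

        def verify_feature(image, predicted_xy, template, window=15):
            """Check only a small window around the model's prediction.
            `template` is assumed to be a (2*window, 2*window) reference patch."""
            x, y = np.round(predicted_xy).astype(int)
            patch = image[y - window:y + window, x - window:x + window]
            # simple correlation score between the observed patch and the template
            score = np.corrcoef(patch.ravel(), template.ravel())[0, 1]
            return score > 0.7, score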

    Visual enhancements in pick-and-place tasks: Human operators controlling a simulated cylindrical manipulator

    A teleoperation simulator was constructed with a vector display system, joysticks, and a simulated cylindrical manipulator, in order to quantitatively evaluate various display conditions. The first of two experiments investigated the effects of perspective parameter variations on human operators' pick-and-place performance, using a monoscopic perspective display. The second experiment compared visual enhancements of the monoscopic perspective display, obtained by adding a grid and reference lines, with visual enhancements of a stereoscopic display; results indicate that stereoscopy generally permits superior pick-and-place performance, but that monoscopy nevertheless allows equivalent performance when the display is defined with appropriate perspective parameter values and adequate visual enhancements.
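
    As a hedged illustration of what "perspective parameter variations" could look like in such a display, the sketch below projects 3-D points with adjustable azimuth, elevation, and viewing distance; the parameter names and default values are assumptions, not the values used in the experiment.

        # Hedged sketch: a monoscopic perspective display parameterized by azimuth,
        # elevation, and viewing distance. Parameter names/defaults are illustrative.
        import numpy as np

        def perspective_project(points, azimuth_deg=30.0, elevation_deg=20.0, distance=5.0):
            """Rotate 3-D points into the viewing frame, then apply perspective division."""
            az, el = np.radians([azimuth_deg, elevation_deg])
            Rz = np.array([[np.cos(az), -np.sin(az), 0],
                           [np.sin(az),  np.cos(az), 0],
                           [0, 0, 1]])
            Rx = np.array([[1, 0, 0],
                           [0, np.cos(el), -np.sin(el)],
                           [0, np.sin(el),  np.cos(el)]])
            cam = points @ (Rx @ Rz).T
            depth = distance + cam[:, 2]          # push the scene in front of the eye
            return cam[:, :2] / depth[:, None]    # perspective division

        # Example: the same cube projected under two different parameter settings
        cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
        print(perspective_project(cube, azimuth_deg=30))
        print(perspective_project(cube, azimuth_deg=60, distance=8.0))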

    A helmet mounted display to adapt the telerobotic environment to human vision

    A Helmet Mounted Display system has been developed. It provides the capability to display stereo images with the viewpoint tied to the subject's head orientation. This type of display might be useful in a telerobotic environment provided the correct operating parameters are known. The effects of update frequency were tested using a 3D tracking task. The effects of blur were tested using both tracking and pick-and-place tasks. For both, researchers found that operator performance can be degraded if the correct parameters are not used. Researchers are also using the display to explore the use of head movements as part of gaze as subjects search their visual field for target objects.

    Eye movement control and top-down scanpath vision as a design metaphor for robotic vision and control

    The topics covered include the following: eye movement control; higher level eye movement control and Scanpath Theory; top-down robotic vision; and the overall robotic control scheme.

    Instrumentation and robotic image processing using top-down model control

    A top-down image processing scheme is described. A three-dimensional model of a robotic working environment, with robot manipulators, workpieces, cameras, and on-the-scene visual enhancements, is employed to control and direct the image processing, so that rapid, robust algorithms act in an efficient manner to continually update the model. Only the model parameters are communicated, so that savings in bandwidth are achieved. This image compression by modeling is especially important for control of space telerobotics. The background for this scheme lies in a hypothesis of human vision put forward by the senior author and colleagues almost 20 years ago, the Scanpath Theory. Evidence was obtained that repetitive sequences of saccadic eye movements, the scanpath, acted as the checking phase of visual pattern recognition. Further evidence was obtained that the scanpaths were apparently generated by a cognitive model and not directly by the visual image. This top-down theory of human vision was generalized, in some sense, to the frame concept in artificial intelligence. Another source of the concept arose from bioengineering instrumentation for measuring the pupil and eye movements with infrared video cameras and special-purpose hardware.
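
    To illustrate the bandwidth argument, a rough back-of-the-envelope sketch under assumed numbers (a 512x512 8-bit camera frame and a model state of a few dozen floating-point parameters; neither figure is from the paper) compares the bytes needed per update when only model parameters are sent instead of raw images.

        # Hedged sketch: "image compression by modeling" as a rough comparison.
        # Frame size and parameter count are assumptions chosen for illustration.
        RAW_FRAME_BYTES = 512 * 512 * 1        # one 8-bit monochrome camera frame
        MODEL_PARAMS = 30                      # e.g. joint angles plus workpiece poses
        MODEL_UPDATE_BYTES = MODEL_PARAMS * 4  # 32-bit floats

        ratio = RAW_FRAME_BYTES / MODEL_UPDATE_BYTES
        print(f"raw frame: {RAW_FRAME_BYTES} B, model update: {MODEL_UPDATE_BYTES} B")
        print(f"bandwidth reduction ~{ratio:.0f}x per update")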

    Three dimensional tracking with misalignment between display and control axes

    Human operators confronted with misaligned display and control frames of reference performed three-dimensional pursuit tracking in virtual environment and virtual space simulations. Analysis of the components of the tracking errors in the perspective displays presenting virtual space showed that the components of the error due to visual-motor misalignment may be linearly separated from those associated with the mismatch between display and control coordinate systems. Tracking performance improved with several hours of practice, despite previous reports that such improvement did not take place.
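
    A hedged sketch of this kind of linear error decomposition, assuming the misalignment is a pure rotation about the vertical axis (the specific decomposition used in the study is not given here): the momentary error vector is split into the part predicted by rotating the ideal response and a residual part.

        # Hedged sketch: decompose a 3-D tracking error into a component attributable
        # to a rotational display/control misalignment and a residual component.
        # The 30-degree misalignment and the decomposition are illustrative assumptions.
        import numpy as np

        def rot_z(deg):
            a = np.radians(deg)
            return np.array([[np.cos(a), -np.sin(a), 0],
                             [np.sin(a),  np.cos(a), 0],
                             [0, 0, 1]])

        def decompose_error(target_vel, response_vel, misalign_deg=30.0):
            """Split (response - target) into a misalignment-predicted part and a residual."""
            predicted = rot_z(misalign_deg) @ target_vel      # what pure misalignment would produce
            misalignment_error = predicted - target_vel
            residual_error = response_vel - predicted
            total_error = response_vel - target_vel           # = misalignment_error + residual_error
            return misalignment_error, residual_error, total_error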

    Autonomous Image Processing Algorithms Locate Region-of-Interests: The Mars Rover Application

    In this report, we demonstrate that bottom-up image-processing algorithms (IPAs) can perform a new visual task: selecting and locating Regions-Of-Interest (ROIs). This task has been defined on the basis of a theory of top-down human vision, the scanpath theory. Further, using the measures Sp and Ss, the similarity of location and of ordering, respectively, developed over the years in studying human perception and the active looking role of eye movements, we could quantify the efficient and efficacious manner in which IPAs can imitate human vision in locating ROIs. The means to quantitatively evaluate IPA performance has been an important part of our study. In fact, these measures were essential in choosing, from the initial wide variety of IPAs, the particular one that best serves for a given type of picture and for a required task. It should be emphasized that the selection of efficient IPAs has depended upon their correlation with ROIs actually chosen by humans for the same type of picture and the same required task.
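
    The sketch below gives one hedged reading of such measures, assuming Sp counts how many human-chosen ROIs have an algorithm-chosen ROI nearby and Ss scores the ordering with a normalized string-edit (Levenshtein) distance over ROI labels; the exact definitions used in the report may differ.

        # Hedged sketch: location similarity (Sp) and ordering similarity (Ss) between
        # algorithm-chosen and human-chosen ROIs. These definitions are assumptions.
        import numpy as np

        def sp_location(algo_rois, human_rois, radius=20.0):
            """Fraction of human ROIs that have an algorithm ROI within `radius` pixels."""
            hits = 0
            for hx, hy in human_rois:
                if any(np.hypot(ax - hx, ay - hy) <= radius for ax, ay in algo_rois):
                    hits += 1
            return hits / len(human_rois)

        def ss_ordering(algo_labels, human_labels):
            """1 - normalized Levenshtein distance between the two ROI label sequences."""
            m, n = len(algo_labels), len(human_labels)
            d = np.zeros((m + 1, n + 1), dtype=int)
            d[:, 0] = np.arange(m + 1)
            d[0, :] = np.arange(n + 1)
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if algo_labels[i - 1] == human_labels[j - 1] else 1
                    d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
            return 1.0 - d[m, n] / max(m, n)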

    Development of visual 3D virtual environment for control software

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, the construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among a large number of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives a capability for direct and intuitive planning and understanding of the complicated relationships among many concurrent processes. To realize the 3D representation, a technology that enables easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), the authors have implemented a prototype of the virtual workstation. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D environment has considerable potential in the field of software engineering.
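
    As a hedged illustration of fusing a block diagram and a time chart into one 3D space, the sketch below lays processes out in an x-y plane (the block diagram) and plots their events along z (the time axis); the data structures and field names are assumptions chosen for illustration, not the authors' implementation.

        # Hedged sketch: a block diagram (x-y layout of processes) fused with a time
        # chart (events along the z axis). Types and field names are illustrative.
        from dataclasses import dataclass, field

        @dataclass
        class Process:
            name: str
            x: float                                      # block-diagram position
            y: float
            events: list = field(default_factory=list)    # (time, label) pairs

        def event_points_3d(processes):
            """Return (x, y, z, label) points where z is event time, giving a 3D time chart."""
            points = []
            for p in processes:
                for t, label in p.events:
                    points.append((p.x, p.y, t, f"{p.name}:{label}"))
            return points

        # Example: two communicating processes with synchronized send/receive events
        sensor = Process("sensor", x=0.0, y=0.0, events=[(0.0, "sample"), (1.0, "send")])
        ctrl = Process("controller", x=2.0, y=0.0, events=[(1.0, "recv"), (1.5, "actuate")])
        for pt in event_points_3d([sensor, ctrl]):
            print(pt)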